- 286-Based NetWare v2.1x File Service Processes
-
- The Final Word
-
- Jason Lamb
- Consultant
- Systems Engineering Division
-
- Abstract:
-
- This application note provides an in-depth explanation of File Service
- Processes (FSP) under 286-based NetWare v2.1x. This includes ELS Levels I
- and II; Advanced, dedicated and nondedicated; and SFT. Because of the way
- in which FSPs are allocated, this application note will also provide a
- detailed explanation of the RAM allocation in the DGroup data segment under
- 286-based NetWare v2.1x.
-
- Introduction
-
- The following application note is a preliminary excerpt from an upcoming
- Novell Systems Engineering Division Research report titled "NetWare
- Internals and Structure." The actual report may differ slightly from this
- excerpt; however, the content will be the same. As noted in the abstract,
- this particular excerpt provides an in-depth explanation of File Service
- Processes (FSP) under 286-based NetWare v2.1x. This application note will
- also provide a detailed explanation of the RAM allocation in the DGroup
- data segment under 286-based NetWare v2.1x.
-
- Note that NetWare 386 incorporates a completely different memory scheme
- than 286-based NetWare. As a result, none of the discussion of limitations
- or memory segments applies to NetWare 386.
-
- The most evident problem experienced by users regarding FSPs is a shortage
- of them. This problem has surfaced recently for two reasons. The
- first has been the growing tendency toward building bigger and more complex
- server configurations. The second has been the addition of functionality
- and features to the NetWare operating system (OS). With the shortage of
- FSPs has come a variety of explanations for this problem, from both within
- and without Novell. While some of these explanations have been partially
- correct, this excerpt provides the actual mechanics and breakdown of this
- component of the 286-based NetWare operating system.
-
- After reading this report you should be able to understand all the factors
- affecting FSP allocation, as well as be able to recognize when a server has
- insufficient FSPs. Additionally, you will have several options for dealing
- with FSP-starved servers.
-
- File Service Processes
-
- A File Service Process (FSP) is a process running in the NetWare server
- that services File Service Packets. These are typically NetWare Core
- Protocol (NCP) requests. Workstations, or clients, in a NetWare network
- request services from the file server through NCP requests. When a
- workstation wants to read a file, the NetWare shell builds a packet with
- the appropriate NCP request for reading the correct file, and then sends it
- off to the server.
-
- At the server, the NCP request is handed off to an FSP, the only process
- running in the NetWare server that can process an NCP request. The FSP
- handles the request in one of two ways: it either processes the request
- immediately, or it schedules additional processes to service the request
- on its behalf.
-
- Because there are various processes with various lengths of run time that
- can be used in the servicing of a workstation's NCP request, FSPs become a
- potential bottleneck at the server. The following is an example:
-
- A workstation sends an NCP request that asks for a certain block of data to
- be read from the server's disk. The FSP servicing the NCP request schedules
- the appropriate process to retrieve the information from the disk, and then
- instructs this disk process to wake it up when it has the information. The
- FSP then goes to sleep waiting for completion of the disk process.
-
- If no other FSPs are available to run, no other NCP requests can be
- processed until this first request is finished. Until an FSP becomes
- available, the server is forced to process lower priority processes (if any
- are scheduled to run) until the disk request is completed and the FSP
- returns with another request. The server will also delay or ignore any new
- NCP requests that come in during this time period.
-
- It should be noted that an FSP will only go to sleep when it waits for
- information coming back from a disk request. There are typically no other
- processes in the NetWare server that the FSP uses which would cause the FSP
- to go to sleep.
-
- When a server does not have enough FSPs, performance will typically be
- degraded, especially in heavy data-movement environments (large file
- copies, database environments and so on). The example above depicts the
- problem: the file server must process NCP requests serially rather than in
- parallel, creating a longer waiting line for requests.
-
- (How many of us have expressed frustration at seeing only one bank teller
- servicing the waiting line on a Friday afternoon?)
-
- Additionally, because there is only a certain amount of buffer space
- available on a server for incoming packets, packets arriving after this
- buffer space is filled are discarded. The workstations must then spend
- more time resending requests, which reduces performance for the
- workstation and for the network as a whole due to the increased traffic
- on the cable.
-
- However, not all degradation can be attributed to a lack of FSPs, even in
- heavy data-movement environments. In some instances, bad or intermittent
- NICs, either at the server or at another node, can create the very same
- performance degradations.
-
- You can use FCONSOLE statistics to determine if your server has FSP
- problems:
-
- One such statistic is the File Service Used Route number on the FCONSOLE
- -> LAN I/O Stats screen, which counts the File Service Packets that had
- to wait for an FSP. Note this number at one point in the day (preferably
- before major application usage or heavy server utilization), and then
- take another reading later (after heavy server utilization). Subtracting
- the two readings gives a total File Service Used Route number for this
- time period.
-
- At the same time, note the number of File Service Packets (NCP requests)
- shown on the same FCONSOLE screen at both readings, to get the number of
- File Service Packets for this time period. You can then figure the ratio
- of File Service Used Route to File Service Packets as a percentage.
- Current information suggests this ratio should not exceed 10 percent.
-
- Note that the File Service Used Route counter is 16 bits wide and rolls
- over after 65,535. If your second measurement is less than your first,
- the counter has rolled over (at least once), and you will need to add
- 65,536 to the second reading to get your adjusted (true) second reading.
- Additionally, you should take these readings several times over the
- course of the day, or several days, using varying time periods, to see
- if there are any radical differences in the percentages. (As a last
- resort you can commit someone to continually monitor this particular
- FCONSOLE screen to see how many times the File Service Used Route
- counter rolls over.)
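- The rollover-corrected arithmetic above can be sketched in a few lines.
- This is an illustrative calculation only (the names below are not part of
- FCONSOLE), and it assumes at most one rollover between readings, as the
- text notes.

```python
# Sketch: estimate the File Service Used Route percentage from two FCONSOLE
# readings, correcting for 16-bit counter rollover. Illustrative only.

COUNTER_MODULUS = 65536  # the counters are 16 bits wide

def counter_delta(first, second):
    """Difference between two readings of a counter that wraps at 65,536.
    Assumes at most one rollover between the readings."""
    if second < first:
        second += COUNTER_MODULUS  # the counter rolled over
    return second - first

def used_route_percentage(route_1, route_2, packets_1, packets_2):
    """Percentage of File Service Packets that had to wait for an FSP."""
    waited = counter_delta(route_1, route_2)
    total = counter_delta(packets_1, packets_2)
    return 100.0 * waited / total

# Example: the Used Route counter wrapped between the two readings.
pct = used_route_percentage(60000, 2000, 10000, 90000)  # 9.42 percent
```

- A result above the suggested 10 percent threshold would point to an FSP
- shortage.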
-
- The one condition under which this statistic will not give a true reading
- of FSP shortages is a server configured with the Racal-Interlan NP600 NIC
- using the newer drivers supplied by Racal-Interlan. Because of the way the
- driver was written, it increments the File Service Used Route counter for
- every File Service Packet delivered to the server, whether an FSP is
- available or not, so this FCONSOLE method of determining FSP shortages
- will be misleading.
-
- Finally, we will see later that attaching an FSP cost to a particular
- option (saying, for example, that a certain LAN board takes up two FSPs)
- is inaccurate. One server's FSP allocation can be radically different
- from another's. The most accurate measure for server options is DGroup
- bytes used, not FSPs.
-
- DGroup Data Segment
-
- The DGroup data segment is the most important segment of memory for the
- NetWare Operating System. It is a single 64KB block of RAM, a size that
- cannot be changed because pre-80386 Intel microprocessors address memory
- in 64KB segments. This 64KB block exists (and is required for the server
- to even operate) in the smallest as well as the largest of server
- configurations; adding or removing RAM does not affect it at all. (Indeed
- this RAM is part of the minimum RAM requirement of the NetWare OS.)
-
- The DGroup data segment contains various integral components that serve as
- the heart of the NetWare OS. These components are the Global Static Data
- area, the Process Stacks area, the Volume and Monitor Tables area, Dynamic
- Memory Pool 1 and the File Service Process Buffers area. These components
- all reside within a single 64KB data segment mostly for performance
- reasons. In past versions of NetWare some components were removed from
- the DGroup data segment to accommodate increased functionality as it was
- added to the OS; however, removing further components from this area in
- the current version of the OS would necessitate major changes.
-
- The Global Static Data area contains all the global variables defined in
- the NetWare OS. This area also contains all of the global variables defined
- by the network interface card (NIC) drivers and the disk controller (or
- VADD) drivers.
-
- The Process Stacks area provides stack space for all of the various NetWare
- processes.
-
- The Volume and Monitor tables contain information for the Monitor screen of
- the file server, as well as information on all of the disk volumes mounted
- on the server.
-
- Dynamic Memory Pool 1 is used by virtually all NetWare processes and
- routines as either a temporary or semi-permanent workspace.
-
- The File Service Process buffers are the buffers where incoming File
- Service Packets are placed. An interesting side note is that FSPs are not a
- part of DGroup. However, the number of File Service Process buffers
- directly determines how many FSPs are allocated.
-
- The following graphic illustrates the five components and their minimum to
- maximum RAM allocation:
-
- : DGroup Data Segment
-
- The following sections of this report will go into more detail on each one
- of these DGroup components.
-
- Global Static Data:
- 28-39.3KB
-
- The Global Static Data area is typically the largest single segment of
- DGroup allocated.
-
- The Global Static Data area contains all of the global variables defined
- by the operating system code. This allocation has grown not only with
- each successive version of the OS, but with most revisions as well. A
- table of OS DGroup allocations is included for comparison.
-
- This area also contains all of the global variables defined in both the
- NetWare network interface card (NIC) drivers and the disk drivers. Tables
- for disk and NIC driver DGroup allocations are also included.
-
- When loading multiple NIC drivers, the variables are allocated in DGroup
- once for each NIC driver. If the same NIC driver is loaded twice, then the
- variables are allocated twice. For example, if you configure two NE2000s
- into the OS, the DGroup allocation is 812 bytes (2 times 406 bytes).
-
- When loading multiple disk drivers, the variables are also allocated in
- DGroup once for each disk driver. However, if the same disk driver is
- loaded multiple times, the variables are still only allocated once. For
- example, if you configure the ISA disk driver and the Novell DCB driver
- into the OS, then the DGroup allocation is 292 plus 783, or 1,075 bytes.
- However, if you configure two Novell DCBs into the OS, the DGroup
- allocation is only 783 bytes, not 1,566.
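- The per-instance rule for NIC drivers and the per-unique-driver rule for
- disk drivers can be sketched as follows. The byte figures are the ones
- quoted above; the function itself is illustrative, not part of NETGEN.

```python
# Sketch of the Global Static Data rule above: NIC driver variables are
# allocated once per configured board, while disk driver variables are
# allocated only once per unique driver, however many times it is loaded.

def global_static_bytes(nic_drivers, disk_drivers, sizes):
    """nic_drivers/disk_drivers list driver names (repeats allowed);
    sizes maps a driver name to its DGroup allocation in bytes."""
    nic_bytes = sum(sizes[n] for n in nic_drivers)         # per instance
    disk_bytes = sum(sizes[d] for d in set(disk_drivers))  # per unique driver
    return nic_bytes + disk_bytes

SIZES = {"NE2000": 406, "ISA": 292, "DCB": 783}

two_ne2000 = global_static_bytes(["NE2000", "NE2000"], [], SIZES)  # 812
isa_plus_dcb = global_static_bytes([], ["ISA", "DCB"], SIZES)      # 1,075
two_dcb = global_static_bytes([], ["DCB", "DCB"], SIZES)           # 783
```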
-
- : Operating System / Disk Driver DGroup Allocation Tables
-
- : NIC Driver Code Sizes
-
- Determining Global Static Data Allocations
-
- You can determine the Global Static Data allocation of any given driver by
- using the Novell supplied linker program, NLINK.EXE, to generate a map
- file. You simply select the particular object file which corresponds to
- either the NIC driver, the disk driver or the NetWare OS object file
- desired, and then generate a map file using the following command:
-
- NLINK -m <output file> <object file>
-
- For example: NLINK -m scsi.map scsi.obj
-
- The map output file will look like the NLINK SCSI map output file shown
- below. The DGroup Global Static variable allocations are listed in the
- first portion of the file, called the Segment Table. Search down the
- Segment Table listing until you come upon the line(s) that list DATA in
- both the Name and Class columns. The size in bytes of the allocation is
- the hexadecimal number under the Len (length) column. Simply convert the
- hexadecimal number to decimal to get the byte size of the Global Static
- Data variables that this particular driver allocates.
-
- : NLINK SCSI Map Output File
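- Because the Len column is hexadecimal, the conversion to a byte count is
- a one-liner. The example value here is illustrative, not taken from an
- actual map file.

```python
# Sketch: convert a Segment Table Len field (hexadecimal string) to a
# decimal byte count. The example value is illustrative only.

def map_len_to_bytes(len_field):
    return int(len_field, 16)

size = map_len_to_bytes("0124")  # 124h -> 292 bytes
```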
-
- You can also generate a map file for an entire OS by using this same
- technique. Simply chain all the object files together in the command line
- like the following:
-
- NLINK -m net$os.map tts.obj cache.obj ane2000.obj bne2000.obj nullc.obj
- nulld.obj scsi.obj
-
- This is an example of all the object files that are used to create a
- NET$OS.EXE file given this particular configuration. You can see what your
- particular configuration is from looking at the existing NET$OS.LNK file
- from the AUXGEN diskette. This file will list all of the object files used
- when you last generated the OS. The resulting map file is much larger
- than the one generated for a single driver, but it is read in exactly the
- same way.
-
- : NLINK NET$OS Map Output File
-
- Process Stacks:
- 7-10.5KB
-
- There is a stack area allocated for each NetWare server process. The
- individual stacks range from 80 to 1,000 bytes. The following are the
- stack space requirements for the NetWare processes:
-
- Standard Operating System processes: 7,136 bytes
-
- TTS Stack: 250 bytes
-
- Print Spooler Stack: 668 bytes
-
- (This is allocated once for each port spooled in NETGEN)
-
- Note that print spooler stacks are created only for spooled ports defined
- in NETGEN. Print servers and print queues do not affect FSP allocation.
-
- Volume & Monitor Tables:
- 0.2-11.5KB
-
- The Monitor table is used by the console to store information required to
- display the Monitor screen. This table size is fixed, not configurable.
-
- Monitor Table Size: 84 bytes
-
- The Volume table is used to maintain information on each of the disk
- volumes mounted on the server. The size of memory allocated for this table
- is dependent on the size of the mounted volumes, as well as the number of
- directory entries allocated in NETGEN.
-
- Please note that mounted volume size is used for these tables, so
- mirrored drives are not counted twice. The following are the Volume table
- memory requirements:
-
- For each volume mounted on the server: 84.00 bytes
-
- For each MB of disk space mounted: 1.75 bytes
-
- For each 18 directory entries on all volumes: 1.00 byte
-
- (These numbers are rounded to the next highest integer.)
-
- For example, if you had a server with only one volume (SYS:) mounted, with
- a size of 145MB and 9,600 directory entries, the Volume and Monitor tables
- would require:
-
- 84 + (1 * 84) + (145 * 1.75) + (9600 / 18) bytes of DGroup memory,
- or 956 bytes (84 + 84 + 254 + 534, each term rounded up)
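- The table arithmetic above can be sketched as follows; the per-term
- figures come from the text, and the function is illustrative only.

```python
import math

# Sketch of the Volume and Monitor table arithmetic: 84 bytes for the
# Monitor table, 84 per mounted volume, 1.75 per MB mounted, and one byte
# per 18 directory entries, with each term rounded up.

MONITOR_TABLE = 84

def volume_monitor_bytes(volume_sizes_mb, directory_entries):
    """volume_sizes_mb: sizes of mounted volumes in MB (mirrored drives
    are counted once)."""
    total = MONITOR_TABLE
    total += 84 * len(volume_sizes_mb)               # per mounted volume
    total += math.ceil(sum(volume_sizes_mb) * 1.75)  # per MB mounted
    total += math.ceil(directory_entries / 18)       # per 18 directory entries
    return total

# One 145MB SYS: volume with 9,600 directory entries: 956 bytes.
example = volume_monitor_bytes([145], 9600)
```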
-
- Dynamic Memory Pool 1:
- 16-20.9KB
-
- Dynamic Memory Pool 1 (DMP 1) is used by virtually all NetWare processes
- and routines as temporary workspace. Workspace from 2 to 1,024 bytes (with
- 128 being the average) is allocated to a NetWare process. This workspace is
- used, then recovered upon completion.
-
- Additionally, there are several NetWare processes and routines that hold
- memory allocated out of DMP 1, either on a semi-permanent basis, or until
- the process or routine finishes. A table of these semi-permanent DMP 1
- allocations is included.
-
- If there is no DMP 1 RAM available for a process or routine, the
- workstation will likely display "Network Error: out of dynamic workspace
- during <operation>," where <operation> refers to the name of the DOS call
- being tried. In some instances, with some versions of the OS, running out
- of DMP 1 RAM can cause print jobs to either disappear (until more DMP 1 RAM
- is released) or be lost completely. It has also been reported that under
- some earlier versions of 286-based NetWare, running out of DMP 1 RAM can
- cause the server to lock up without displaying an ABEND error message.
-
- Please note that references to "dynamic workspace" in an error message can
- refer to the unavailability of RAM in either Dynamic Memory Pools 1, 2 or
- 3. Use the FCONSOLE => Summary => Statistics screen to determine the exact
- memory pool involved.
-
- : Semi-Permanent Dynamic Memory Pool 1 Allocations
-
- Please note that the DMP 1 allocations marked with an asterisk (*) are,
- in practical terms, permanent allocations. Changing them requires some
- type of server reconfiguration.
-
- File Service Process Buffers:
- 1.5-12.8 KB
-
- The File Service Process buffers are the buffers allocated in DGroup for
- incoming File Service Packets. The number of FSP buffers available
- determines how many FSPs your server will have, in a one-to-one
- relationship: four FSP buffers give you four FSPs. The maximum number of
- FSPs for any server configuration is 10.
- The following is the breakdown of memory requirements for each FSP buffer:
-
- Reply buffer 94 bytes
-
- Workspace 106 bytes
-
- Stack space 768 bytes
-
- Receive buffer 512-4,096 bytes
-
- The total size of the FSP buffer depends on the largest packet size of
- any NIC driver configured into the operating system; that packet size
- becomes the receive buffer portion of the FSP buffer.
-
- For example, if you have configured an Ethernet driver with a packet size
- of 1,024 bytes and an ARCNET driver using 4,096-byte packets, then the
- FSP buffers for this server will be 5,064 bytes:
-
- (4096 + 768 + 106 + 94)
-
- This shows that configuring a server with NIC drivers of different packet
- sizes can be very inefficient. You can optimize this by keeping all of the
- NIC driver packet sizes the same size.
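- The sizing rule above can be sketched as follows. The fixed overheads are
- the figures from the breakdown; the function is illustrative only.

```python
# Sketch of FSP buffer sizing: fixed overhead (94-byte reply buffer,
# 106-byte workspace, 768-byte stack) plus a receive buffer equal to the
# largest packet size of any configured NIC driver.

REPLY, WORKSPACE, STACK = 94, 106, 768

def fsp_buffer_size(packet_sizes):
    """packet_sizes: the packet size in bytes of each configured NIC driver."""
    return REPLY + WORKSPACE + STACK + max(packet_sizes)

mixed = fsp_buffer_size([1024, 4096])    # 5,064 bytes, as in the example
uniform = fsp_buffer_size([1024, 1024])  # 1,992 bytes with uniform packets
```

- Comparing the two results shows why uniform packet sizes are more
- efficient: every FSP buffer pays for the largest configured board.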
-
- Additional File Service Process Buffer RAM
-
- There is also a one-time allocation of a single additional reply buffer of
- 94 bytes. Lastly, if any NIC driver configured into the OS supports DMA
- access, there may be additional memory that will be set aside, unused.
-
- Additional reply buffer 94 bytes
-
- Memory set aside for DMA workaround 0-4,095 bytes
-
- This is because in some PCs the DMA chip cannot correctly handle
- addresses that cross physical 64KB RAM boundaries. If the receive buffer
- of an FSP buffer would straddle a physical 64KB RAM boundary, the OS
- skips that memory (depending on the size of the receive buffer, this
- could be 0 to 4,095 bytes) and does not use it. This loss can be avoided
- by changing to a non-DMA NIC driver. It is also conceivable that changing
- the Volume tables can shift the data structures enough to avoid the
- straddling. The following graphic depicts this workaround.
-
- : DMA Workaround
-
- : NIC Driver Packet and DGroup Buffer Sizes (bytes)
-
- This table shows various NIC drivers, their packet and FSP buffer sizes,
- and whether or not they use DMA. The sections that follow show the exact
- steps in allocating DGroup RAM, the options that have the biggest impact
- on DGroup RAM allocation, and some steps for alleviating FSP shortages.
-
- DGroup RAM Allocation, Troubleshooting and Management
-
- DGroup RAM Allocation Process
-
- The following is the step-by-step process of allocating RAM in DGroup:
-
- 1) The OS first allocates the Global Static Data Area of DGroup. This
- includes OS, NIC and disk variables.
-
- 2) The Process stacks are allocated.
-
- 3) The Volume and Monitor tables are allocated.
-
- 4) 16KB is set aside for Dynamic Memory Pool 1.
-
- 5) The remaining DGroup RAM is used to set up File Service Process buffers.
-
- 6) 94 bytes are set aside as an additional reply buffer.
-
- 7) 0-4,095 bytes may be set aside (unused) if any installed NIC driver
- supports DMA.
-
- 8) The remaining RAM is divided by the total FSP buffer size to determine
- the number of FSP buffers, up to a maximum of 10.
-
- 9) The remaining DGroup RAM that could not be evenly made into an FSP
- buffer is added to Dynamic Memory Pool 1.
-
- A server configured with an NE2000 NIC and an SMC ARCNET NIC, using the
- Novell driver, would require an FSP buffer size of 1,992 bytes, the FSP
- buffer size of the NIC with the larger packet size. (The NE2000 has a
- 1,024-byte packet size as opposed to the 512-byte packet size of the
- ARCNET card.)
-
- If, after allocating all the prior DGroup data structures, 7,500 bytes of
- DGroup remain available for FSP buffers, the allocation would be as
- follows:
-
- Subtract 94 bytes from the 7,500 for the one additional reply buffer and
- divide the remainder by the 1,992 FSP buffer size. This gives 3 FSP buffers
- and a remainder of 1,430 to be added to Dynamic Memory Pool 1. The
- following is the computation:
-
- (7500 - 94) / 1992 = 3 with a remainder of 1430
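- Steps 6 through 9 of the allocation process, and the worked example
- above, can be sketched as follows (illustrative only):

```python
# Sketch: subtract the extra 94-byte reply buffer (and any DMA workaround
# loss), divide by the FSP buffer size (capped at 10 FSPs), and hand the
# remainder to Dynamic Memory Pool 1.

MAX_FSPS = 10
EXTRA_REPLY = 94

def allocate_fsps(remaining_dgroup, fsp_buffer_size, dma_loss=0):
    usable = remaining_dgroup - EXTRA_REPLY - dma_loss
    fsps = min(usable // fsp_buffer_size, MAX_FSPS)
    leftover_to_dmp1 = usable - fsps * fsp_buffer_size
    return fsps, leftover_to_dmp1

# 7,500 bytes left with a 1,992-byte FSP buffer: 3 FSPs, 1,430 to DMP 1.
fsps, leftover = allocate_fsps(7500, 1992)
```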
-
- Once you understand this process, it becomes easy to see how close any
- particular server is to gaining another FSP:
-
- 1) Figure out the FSP buffer size.
-
- 2) Subtract 16,384 bytes (the fixed size of DMP 1) from the maximum RAM
- in Dynamic Memory Pool 1.
-
- 3) Subtract that difference from the FSP buffer size.
-
- The result is how many bytes short that server configuration is of
- gaining an additional FSP.
-
- For example, if the server configuration has a 1,992 FSP buffer size and
- the maximum DMP 1 is 16,804, then you can figure that to get an additional
- FSP you would have to free up an additional 1,572 bytes of DGroup. The
- following is the computation:
-
- 1992 - (16804 - 16384) = 1572
-
- Two solutions for this configuration would be the following:
-
- Remove three spooled printer ports (3 * 668 = 2004)
-
- Or reduce directory entries by 28,296 (28296 / 18 = 1572)
-
- Either of these would free up the necessary DGroup RAM; however, if
- current DMP 1 RAM usage is close to the current maximum, the best choice
- is the option that leaves additional remainder RAM for DMP 1. That choice
- is removing the three spooled printer ports, which leaves an extra 432
- bytes for DMP 1 (2,004 - 1,572).
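- The three-step shortfall check and the two candidate fixes can be
- sketched as follows (illustrative only):

```python
# Sketch: any DMP 1 RAM beyond its fixed 16,384 bytes is the leftover from
# the FSP buffer division, so the gap to the next FSP is the buffer size
# minus that leftover.

DMP1_FIXED = 16384

def bytes_short_of_next_fsp(fsp_buffer_size, dmp1_maximum):
    leftover = dmp1_maximum - DMP1_FIXED  # remainder that fell into DMP 1
    return fsp_buffer_size - leftover

short = bytes_short_of_next_fsp(1992, 16804)  # 1,572 bytes short

# Candidate fixes from the text: each spooled port frees a 668-byte stack;
# every 18 directory entries removed free 1 byte.
freed_by_ports = 3 * 668        # 2,004 bytes; 432 bytes extra for DMP 1
freed_by_entries = 28296 // 18  # exactly 1,572 bytes
```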
-
- Troubleshooting
-
- The following items have the biggest impact on DGroup RAM allocation. They
- are listed in order of importance:
-
- 1) NIC driver packet size
-
- 2) Amount of disk space mounted and directory entries allocated
-
- 3) NIC and disk driver global variables
-
- 4) Possible RAM loss to DMA workaround
-
- 5) Spooled ports defined in NETGEN
-
- 6) TTS
-
- The following is the methodology used to define this list:
-
- 1) The NIC driver packet size has the most significant impact on the
- allocation of FSPs because it determines the divisor used to allocate FSP
- buffers. The larger the packet size, the larger the FSP buffer and the
- fewer the FSPs.
-
- 2) Disk configurations have the second largest impact on DGroup RAM
- allocation. Mounting 2GB (2,000MB) of disk space with 180,000 directory
- entries allocated would require 13,584 bytes of DGroup RAM alone
- (84 + 3,500 + 10,000).
-
- 3) These NIC and disk variables can be significant in size. The Async WNIM
- driver alone requires 9,942 bytes of DGroup RAM.
-
- 4) The maximum DGroup RAM that can be lost to this workaround is 4,095
- bytes.
-
- 5) The maximum DGroup RAM that can be allocated to print spooler stacks is
- 3,340 bytes.
-
- 6) The TTS process stack uses 250 bytes of DGroup RAM. Also, Global Static
- Data variables for TTS can run from 142 to 152 additional bytes.
-
- DGroup RAM Management Methods
-
- 1) Remove spooled ports in NETGEN
-
- 2) Decrease directory entries
-
- Warning: Before decreasing directory entries, please read the Directory
- Entries section later in this report. If you incorrectly reduce your
- directory entries, you can lose files, directory structures and trustee
- privileges.
-
- 3) Change NIC drivers to non-DMA ones
-
- 4) Decrease NIC driver packet size
-
- 5) Remove TTS
-
- 6) Decrease disk space
-
- 7) Use Dynamic Memory Pool 1 patch for qualified servers
-
- Warning: Before using the DMP 1 patch, please read the Dynamic Memory
- Pool 1 Patch section later in this report. If you use the patch
- incorrectly, you can leave the server dysfunctional or inoperable.
-
- This order was chosen because the impact on server configurations was
- deemed minimal for choice 1 and increases as you move down the list.
- However, some of these changes are impossible for certain configurations;
- for example, removing spooled printers from your server may be
- impractical or impossible.
-
- TTS removal was placed fifth on this list and merits special mention.
- Even if you are not using implicit or explicit TTS with your
- applications, the NetWare OS still uses TTS services for specific
- operations. For example, when TTS is installed, the NetWare OS uses it
- when performing updates to the NetWare bindery files; in the event of a
- server failure, this protects against corruption due to incomplete
- bindery updates.
-
- Directory Entries
-
- When a 286-based NetWare server sets up directory entries based upon the
- number defined in NETGEN, it allocates them as one block. As directory
- entries are used up (by files, directories and trustee privileges), the
- block fills sequentially. As the directory entry block fills, NetWare
- keeps track of the peak directory entry used, the highest-numbered entry
- used in the directory entry block.
-
- However, directory entries are added sequentially only as long as prior
- directory entries are not deleted. When directory entries are deleted,
- holes in the directory entry block are created that are filled by
- subsequent new directory entries.
-
- : New Server Directory Entry Block
-
- : Used Server Directory Entry Block
-
- This directory entry block fragmentation is never repaired, either by
- NetWare during operation or by running VREPAIR. To determine the amount
- of directory block fragmentation, perform the following:
-
- 1) Pull up the FCONSOLE => Statistics => Volume Information => (select a
- volume) screen.
-
- 2) Take the Maximum Directory Entries number and subtract the Peak
- Directory Entries Used number from it. This is the number of free directory
- entries that can be safely manipulated via NETGEN without loss of files.
-
- 3) Next take the Current Free Directory Entries number and subtract the
- number from step two. This is the number of free directory entries inside
- your live block of directory entries, and it indicates how fragmented
- your directory block is: the higher the proportion of free entries inside
- the live block (the number from step 3) to those outside it (the number
- from step 2), the more fragmented the directory block.
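- The fragmentation check from the three FCONSOLE numbers can be sketched
- as follows. The example figures are illustrative, not from a real server.

```python
# Sketch: free entries past the peak can be trimmed safely; free entries
# inside the live block are fragmentation holes.

def directory_fragmentation(maximum, peak_used, current_free):
    safely_removable = maximum - peak_used  # free entries past the peak
    holes_in_live_block = current_free - safely_removable
    return safely_removable, holes_in_live_block

# 10,000 maximum entries, peak of 9,200 used, 3,000 currently free:
removable, holes = directory_fragmentation(10000, 9200, 3000)
# 800 entries can be trimmed safely; 2,200 free entries sit inside the
# live block, indicating heavy fragmentation.
```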
-
- When you down a server and run NETGEN in order to reduce directory entries,
- the directory entry block is simply truncated to the new number. NETGEN
- does not check to see if directory entries that are about to be deleted are
- in use. If you have a fragmented directory block and you reduce the
- directory entry block based upon the amount of free directory entries you
- have available, it is likely that you will be deleting directory entries
- that are in use.
-
- This will cause files, directories and trustee privileges that were defined
- in those directory entries to be lost. Running VREPAIR reestablishes
- directory entries for those lost files and some of the directory structure,
- and will save most files in the root directory with names that are created
- by VREPAIR. These names are typically something like VF000000.000. For a
- more complete description of the VREPAIR program, consult section eight,
- "Running the VREPAIR Utility," in the SFT/Advanced NetWare 286
- Maintenance manual.
-
- If you do not wish to manually defragment the server's directory block by
- backing up and then restoring the entire volume, the only safe basis for
- reducing your directory entries is your Peak Directory Entries Used
- number: you can reduce your total directory entries to near this number
- without loss of files. To see quickly whether manipulating directory
- entries will buy your server any FSPs, check your Maximum Directory
- Entries number against your Peak Directory Entries Used number. The
- difference between the two is the number of directory entries you can
- manipulate; you can then calculate whether reducing by this number will
- yield additional FSPs.
-
- If you recognize that you have a significant amount of directory block
- fragmentation, you can elect to manually defragment it using the following
- method:
-
- 1) Make a complete backup of the volume(s) whose directory entry block you
- wish to defragment.
-
- 2) Make a note of how many directory entries you have used.
-
- Do this by rerunning FCONSOLE and selecting the Statistics => Volume
- Information => (select a volume) screen. Next, take the Maximum Directory
- Entries number and subtract the Current Free Directory Entries number
- from it. This is the number of directory entries you have used. In order
- to complete a full restore, you will need to allocate at least this many
- directory entries in NETGEN.
-
- (You can also get this number by running CHKVOL on your volume from the
- command line.)
-
- You should now calculate how many total directory entries you wish to have.
- If you do not have a set number in mind, take the number of used directory
- entries and add half again to it. For example, if you have used 6,000
- directory entries, allocating 9,000 is a good start.
-
- 3) Down the server and rerun NETGEN.
-
- 4) Reinitialize the selected volume(s).
-
- 5) Reset the number of directory entries to the number you calculated from
- step two.
-
- 6) Restore your volume(s) from your backup.
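- The sizing arithmetic in step 2 above can be sketched as follows. The
- FCONSOLE figures are illustrative; the half-again headroom factor is the
- text's rule of thumb, not a fixed requirement.

```python
# Sketch: compute the entries a full restore will need, plus suggested
# headroom of half again the used count.

def new_directory_allocation(maximum_entries, current_free):
    used = maximum_entries - current_free  # entries a full restore will need
    return used, used + used // 2          # (used entries, suggested allocation)

# 10,000 maximum with 4,000 free means 6,000 used; allocate about 9,000.
used, suggested = new_directory_allocation(10000, 4000)
```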
-
- A final note regarding directory entries concerns NetWare servers being
- used for Macintosh file storage. Using the Peak Directory Entries Used
- number to calculate how many directory entries you may safely remove does
- not work for these servers, because when Macintosh files are created on a
- NetWare server, the resource fork portion of each Macintosh file does not
- increment the Peak Directory Entries Used counter. As a result, the only
- way to reduce directory entries on such servers is the manual directory
- block defragmentation method outlined above.
-
- Dynamic Memory Pool 1 Patch
-
- Finally, be aware that Novell has a patch for some FSP-starved server
- configurations. This patch has been available through LANSWER technical
- support for qualified configurations, which is the only Novell-supported
- method of distribution. The patch is also available from other sources in
- an unsupported fashion; use it with caution.
-
- The patch consists of three files: a general-purpose debug-type program
- called PATCH.EXE, a patch instruction file to be used with PATCH.EXE and
- a README file. You can read further instructions for the PATCH program by
- simply running PATCH.EXE without any parameters. The program works by
- taking the patch instruction file and patching the specified file. The
- patch instruction file consists of three lines: a pattern line, an offset
- and a patch line. The following is the Novell-supplied patch instruction
- file called SERVPROC.
-
- : Novell Dynamic Memory Pool 1 Patch Instructions
-
- The patch program will search the specified file (one of the OS .OBJ files)
- for the pattern 8B E0 05 00 1F and will replace the byte 1F with 08.
-
- This changes a portion of the fixed size of Dynamic Memory Pool 1. The 1F
- byte represents a fixed size for this portion of DMP 1 of 1F00h, or 7,936
- bytes. The patch program changes that byte to 08, which represents a fixed
- size of 800h, or 2,048 bytes. This change means that DMP 1 will be reduced
- by 5,888 bytes, or about 5.75KB.
-
- (7936 - 2048 = 5888)
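- As a rough illustration of what the patch does (this is a sketch, not
- Novell's PATCH.EXE; the pattern and replacement byte are taken from the
- description above, and the function name is my own):

```python
# Illustrative sketch only -- NOT Novell's PATCH.EXE. It mimics the
# described behavior: find the pattern 8B E0 05 00 1F in the OS image
# and replace the trailing 1F byte with 08.
PATTERN = bytes([0x8B, 0xE0, 0x05, 0x00, 0x1F])
NEW_BYTE = 0x08  # 0800h = 2,048 bytes for this portion of DMP 1

def apply_dmp1_patch(data):
    """Return a patched copy of data, or raise if the pattern is absent."""
    offset = data.find(PATTERN)
    if offset < 0:
        raise ValueError("pattern not found; wrong or already-patched file")
    patched = bytearray(data)
    # The byte to change is the last byte of the matched pattern.
    patched[offset + len(PATTERN) - 1] = NEW_BYTE
    return bytes(patched)
```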
-
- If you understand the operation of the patch, you can change the number of
- bytes by which DMP 1 is decreased by altering the patch line of the patch
- instruction file. In other words, you can use any value from 1F down to 00
- in place of the 08. This decreases DMP 1 in 256-byte increments as follows:
-
- 1E Decrease DMP 1 by 256 bytes
-
- 1D Decrease DMP 1 by 512 bytes
-
- 1C Decrease DMP 1 by 768 bytes
-
- 1B Decrease DMP 1 by 1,024 bytes
-
- ...
-
- 08 Decrease DMP 1 by 5,888 bytes
-
- ...
-
- 00 Decrease DMP 1 by 7,936 bytes
-
- Remember that the hex number you are actually changing is the two-digit
- patch value followed by 00. In the first case, you are patching the number
- 1F00h (the original value of 7,936 bytes) to 1E00h (7,680 bytes);
- subtracting the two gives the difference of 256 bytes.
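- The arithmetic above can be expressed as a small helper (illustrative
- only; the function name is my own):

```python
# A patch value NN yields a fixed size of NN00h for this portion of
# DMP 1, so the decrease from the original 1F00h (7,936 bytes) is
# (0x1F - NN) * 256 bytes.
def dmp1_decrease(patch_value):
    """Bytes removed from DMP 1 for a given one-byte patch value."""
    if not 0x00 <= patch_value <= 0x1F:
        raise ValueError("decreases only: value must be in 00h-1Fh")
    return (0x1F - patch_value) * 256

print(dmp1_decrease(0x1E))  # 256
print(dmp1_decrease(0x08))  # 5888, the Novell-supplied value
print(dmp1_decrease(0x00))  # 7936
```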
-
- It is conceivable that this patch could also be used to increase the fixed
- size of this portion of DMP 1, by using values greater than 1F; again, the
- fixed size would grow in 256-byte increments for every hex increment above
- 1F. However, it is not known whether an operating system patched in this
- manner will even run. Because this use is untested, and its ramifications
- for other parts of the operating system are unknown, it is strongly
- recommended that you do not attempt to increase this portion of DMP 1 with
- this patch.
-
- It is also strongly recommended that you do not alter the patch line
- numbers to decrease DMP 1 RAM by a larger value than the one supplied with
- the patch. Just as you can produce a server OS that will not run by
- attempting to increase DMP 1, you can also produce a server OS that will
- not run by attempting to decrease DMP 1 too much.
-
- If you have not performed the DGroup RAM calculations and you do not know
- exactly what you are gaining in FSPs and losing in DMP 1, you should not
- alter the patch. If you do alter the patch value, do the calculations on
- paper first and remember the numbers involved.
-
- Use the number you calculated from page 17 that told you how many bytes you
- were short of gaining an additional FSP. At a minimum, your patch must free
- more than that number of bytes, or you will accomplish nothing by patching
- the OS.
-
- The current patch value of 08 was arrived at because it will provide the
- following FSP gains in a minimum to maximum range of FSPs:
-
- 512-byte Packet NIC Drivers 3-4 Additional FSPs
-
- 1024-byte Packet NIC Drivers 2-3 Additional FSPs
-
- 2048-byte Packet NIC Drivers 1-2 Additional FSPs
-
- 4096-byte Packet NIC Drivers 1-2 Additional FSPs
-
- The warnings for use of this patch should be self-evident by now. If you
- run short of Dynamic Memory Pool 1 RAM, you will get erratic and sometimes
- fatal server behavior. Also, altering the patch numbers is not a guaranteed
- or supported function of the patch. These numbers and this explanation were
- arrived at from our understanding of how the patch works, confirmed by
- performing the calculations. If you feel the need to use the patch, use it
- as supplied.
-
- Prior to sending a user the patch, LANSWER verifies that the maximum RAM
- allocated to Dynamic Memory Pool 1 is at least 6KB greater than the peak
- RAM used. If you receive the patch through other means, you should, at
- minimum, check those numbers yourself.
-
- Due to the nature of fixes that involve patching the operating system, it
- is recommended that you try all prior means of curing an FSP-starved server
- before resorting to this fix.
-
- Final Notes
-
- 286-based NetWare v2.1x was designed on the principle that enhanced
- client-server computing would be the foundation of future computer
- networking. Technologies such as a non-preemptive server OS, disk
- mirroring, transaction tracking and hardware independence, all implemented
- with exceptional speed and security, formed the basis of the technology and
- design that went into 286-based NetWare. The fact that almost all other
- current network implementations are now borrowing heavily from these ideas
- matters little. That more and more users are placing larger amounts of
- computing resources into their NetWare LANs only reaffirms the sound
- concepts behind the design. However, as with any design, limitations define
- the playing field.
-
- One conclusion that can be drawn from this report is echoed on the final
- page of the SFT NetWare 286 In-Depth Product Definition:
-
- "*NOTE: Maximums listed are individual limits. You will not be able to use
- these specifications at maximum levels at all times. You may have to
- purchase additional hardware to achieve some of these maximums."
-
- After understanding the relationship between the separate DGroup RAM
- components and the File Service Processes, it becomes evident that setting
- up an Advanced NetWare 286 server with 100 workstations, 2GB of disk space
- and four NICs is inadvisable. The limitations on this type of configuration
- can be attributed largely to current hardware limitations.
-
- NetWare design technologies remain firmly focused on furthering network
- computing, and NetWare 386 is the next logical design step. NetWare 386
- introduces a completely new memory scheme that renders all of the current
- discussion of FSP and DGroup limitations academic. Using a completely
- dynamic memory and module management system, NetWare 386 is beginning to
- introduce a new set of features and technologies that will represent the
- new standard for computer networks. And as the next generation of computer
- hardware becomes available, we will find that NetWare 386 will be ready
- and waiting.
-